Nonlinear reformulations of spectral clustering methods have recently gained attention due to their numerical benefits and solid mathematical background. We present a novel direct multiway spectral clustering algorithm in the $p$-norm, for $p \in (1, 2]$. The problem of computing multiple eigenvectors of the graph $p$-Laplacian, a nonlinear generalization of the standard graph Laplacian, is recast as an unconstrained minimization problem on a Grassmann manifold. The value of $p$ is reduced in a pseudo-continuous manner, promoting sparser solution vectors that correspond to optimal graph cuts as $p$ approaches one. Monitoring the monotonic decrease of the balanced graph cuts guarantees that we retain the best available solution among the $p$-levels considered. We demonstrate the effectiveness and accuracy of our algorithm on various artificial test cases. Our numerical results and comparisons with various state-of-the-art clustering methods show that the proposed method obtains high-quality clusters both in terms of balanced graph cut metrics and in terms of the accuracy of the label assignment. Furthermore, we conduct studies on the classification of facial images and handwritten characters to demonstrate the applicability to real-world datasets.
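A minimal sketch of the $p$-continuation idea from this abstract, not the paper's algorithm: it relaxes the multiway Grassmann-manifold problem to a single centred vector, minimizes the graph $p$-Laplacian functional with an off-the-shelf quasi-Newton solver, and monitors a balanced (ratio) cut across the $p$-levels. The `p_levels` schedule, the median threshold, and the ratio-cut metric are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def p_laplacian_functional(f, W, p):
    """Q_p(f) = 0.5 * sum_ij w_ij |f_i - f_j|^p / ||f||_p^p for a single vector f,
    centred to exclude the trivial constant solution (the full method enforces this
    via orthogonality on the Grassmann manifold)."""
    f = f - f.mean()
    diffs = np.abs(f[:, None] - f[None, :]) ** p
    return 0.5 * np.sum(W * diffs) / np.sum(np.abs(f) ** p)

def ratio_cut(labels, W):
    """Balanced-cut metric used to compare the partitions found at different p-levels."""
    cut = 0.0
    for c in np.unique(labels):
        mask = labels == c
        cut += W[mask][:, ~mask].sum() / max(mask.sum(), 1)
    return cut

def p_continuation_bipartition(W, p_levels=(2.0, 1.7, 1.4, 1.2, 1.1), seed=0):
    """Reduce p level by level, warm-starting each solve from the previous solution,
    and keep the thresholded partition with the lowest balanced cut seen so far."""
    rng = np.random.default_rng(seed)
    f = rng.standard_normal(W.shape[0])
    best_labels, best_cut = None, np.inf
    for p in p_levels:
        res = minimize(p_laplacian_functional, f, args=(W, p), method="L-BFGS-B")
        f = res.x
        labels = (f > np.median(f)).astype(int)   # threshold the relaxed solution
        cut = ratio_cut(labels, W)
        if cut < best_cut:                        # monitor the balanced graph cut
            best_labels, best_cut = labels, cut
    return best_labels, best_cut
```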
Data-centric artificial intelligence (data-centric AI) represents an emerging paradigm emphasizing that the systematic design and engineering of data is essential for building effective and efficient AI-based systems. The objective of this article is to introduce practitioners and researchers from the field of Information Systems (IS) to data-centric AI. We define relevant terms, provide key characteristics to contrast the data-centric paradigm to the model-centric one, and introduce a framework for data-centric AI. We distinguish data-centric AI from related concepts and discuss its longer-term implications for the IS community.
The evolution of wireless communications into 6G and beyond is expected to rely on new machine learning (ML)-based capabilities. These can enable proactive decisions and actions from wireless-network components to sustain quality-of-service (QoS) and user experience. Moreover, new use cases in the area of vehicular and industrial communications will emerge. Specifically in the area of vehicle communication, vehicle-to-everything (V2X) schemes will benefit strongly from such advances. With this in mind, we have conducted a detailed measurement campaign with the purpose of enabling a plethora of diverse ML-based studies. The resulting datasets offer GPS-located wireless measurements across diverse urban environments for both cellular (with two different operators) and sidelink radio access technologies, thus enabling a variety of different studies towards V2X. The datasets are labeled and sampled with a high time resolution. Furthermore, we make the data publicly available with all the necessary information to support the on-boarding of new researchers. We provide an initial analysis of the data showing some of the challenges that ML needs to overcome and the features that ML can leverage, as well as some hints at potential research studies.
This paper describes several improvements to a new method for signal decomposition that we recently formulated under the name of Differentiable Dictionary Search (DDS). The fundamental idea of DDS is to exploit a class of powerful deep invertible density estimators called normalizing flows, to model the dictionary in a linear decomposition method such as NMF, effectively creating a bijection between the space of dictionary elements and the associated probability space, allowing a differentiable search through the dictionary space, guided by the estimated densities. As the initial formulation was a proof of concept with some practical limitations, we will present several steps towards making it scalable, hoping to improve both the computational complexity of the method and its signal decomposition capabilities. As a testbed for experimental evaluation, we choose the task of frame-level piano transcription, where the signal is to be decomposed into sources whose activity is attributed to individual piano notes. To highlight the impact of improved non-linear modelling of sources, we compare variants of our method to a linear overcomplete NMF baseline. Experimental results will show that even in the absence of additional constraints, our models produce increasingly sparse and precise decompositions, according to two pertinent evaluation measures.
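A hedged sketch of the differentiable-dictionary-search loop described above, under strong simplifications: `ToyFlow` is a non-invertible stand-in for the trained normalizing flows that model each source's dictionary, and `dds_decompose` only shows the gradient-based search through latent codes and non-negative activations with a density-based regularizer. All names and hyperparameters are hypothetical.

```python
import torch

class ToyFlow(torch.nn.Module):
    """Stand-in for a trained normalizing flow over dictionary atoms: decode() maps
    latent codes to non-negative spectral templates, log_prior() scores the codes.
    A real DDS dictionary uses an invertible density estimator trained per source."""
    def __init__(self, latent_dim=8, n_bins=256):
        super().__init__()
        self.latent_dim = latent_dim
        self.decode = torch.nn.Sequential(torch.nn.Linear(latent_dim, n_bins),
                                          torch.nn.Softplus())
        self.prior = torch.distributions.Normal(0.0, 1.0)

    def log_prior(self, z):
        return self.prior.log_prob(z).sum(dim=-1)

def dds_decompose(x, flow, n_atoms=4, steps=300, lr=5e-2, lam=1e-3):
    """Fit x ≈ sum_k a_k * decode(z_k) by gradient descent through the dictionary,
    with the density model acting as a differentiable regularizer on the search."""
    z = torch.randn(n_atoms, flow.latent_dim, requires_grad=True)
    a_raw = torch.zeros(n_atoms, requires_grad=True)           # softplus -> non-negative gains
    opt = torch.optim.Adam([z, a_raw], lr=lr)
    for _ in range(steps):
        atoms = flow.decode(z)                                  # (n_atoms, n_bins)
        recon = torch.nn.functional.softplus(a_raw) @ atoms     # linear mix of non-linear atoms
        loss = torch.sum((x - recon) ** 2) - lam * flow.log_prior(z).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.nn.functional.softplus(a_raw).detach(), flow.decode(z).detach()
```

In the piano-transcription testbed, one would use one such density model per note and read the resulting activations as frame-level note activity.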
We introduce a novel way to incorporate prior information into (semi-) supervised non-negative matrix factorization, which we call differentiable dictionary search. It enables general, highly flexible and principled modelling of mixtures where non-linear sources are linearly mixed. We study its behavior on an audio decomposition task, and conduct an extensive, highly controlled study of its modelling capabilities.
Audio Spectrogram Transformer models rule the field of Audio Tagging, outrunning previously dominating Convolutional Neural Networks (CNNs). Their superiority is based on the ability to scale up and exploit large-scale datasets such as AudioSet. However, Transformers are demanding in terms of model size and computational requirements compared to CNNs. We propose a training procedure for efficient CNNs based on offline Knowledge Distillation (KD) from high-performing yet complex transformers. The proposed training schema and the efficient CNN design based on MobileNetV3 result in models outperforming previous solutions in terms of parameter and computational efficiency and prediction performance. We provide models of different complexity levels, scaling from low-complexity models up to a new state-of-the-art performance of .483 mAP on AudioSet. Source Code available at: https://github.com/fschmid56/EfficientAT
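A hedged sketch of the kind of offline distillation objective such a training procedure could use (the exact loss and weighting in the paper may differ): the teacher's logits are computed once and stored, so the transformer never runs during student training. `kd_lambda` is a hypothetical weighting.

```python
import torch
import torch.nn.functional as F

def audio_tagging_kd_loss(student_logits, teacher_logits, targets, kd_lambda=0.1):
    """Offline KD for multi-label audio tagging: binary cross-entropy against the
    ground-truth labels, mixed with BCE against the pre-computed teacher
    predictions used as soft targets."""
    label_loss = F.binary_cross_entropy_with_logits(student_logits, targets)
    distill_loss = F.binary_cross_entropy_with_logits(student_logits,
                                                      teacher_logits.sigmoid())
    return kd_lambda * label_loss + (1.0 - kd_lambda) * distill_loss
```

Because the teacher predictions are read from disk, the per-step training cost is that of the efficient CNN student alone.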
Lidar-based SLAM systems perform well in a wide range of circumstances by relying on the geometry of the environment. However, even mature and reliable approaches struggle when the environment contains structureless areas such as long hallways. To allow the use of lidar-based SLAM in such environments, we propose to add reflector markers in locations that would otherwise be difficult for the SLAM system to handle. We present an algorithm to reliably detect these markers and two approaches to fuse the detected markers with geometry-based scan matching. The performance of the proposed methods is demonstrated on real-world datasets from several industrial environments.
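A minimal sketch of how reflector detection in a 2D scan could look, assuming reflectors produce clusters of consecutive high-intensity returns; the thresholds and the clustering rule are illustrative assumptions, not the paper's detector.

```python
import numpy as np

def detect_reflectors(ranges, angles, intensities,
                      intensity_thresh=0.8, max_gap=2, min_points=3):
    """Cluster consecutive high-intensity returns of a 2D scan and return
    one (x, y) estimate per cluster as a candidate reflector position."""
    idx = np.flatnonzero(intensities > intensity_thresh)
    if idx.size == 0:
        return []
    clusters, current = [], [idx[0]]
    for i in idx[1:]:
        if i - current[-1] <= max_gap:      # allow small gaps inside one marker
            current.append(i)
        else:
            clusters.append(current)
            current = [i]
    clusters.append(current)
    markers = []
    for c in clusters:
        if len(c) < min_points:             # reject isolated bright returns
            continue
        c = np.asarray(c)
        xs = ranges[c] * np.cos(angles[c])
        ys = ranges[c] * np.sin(angles[c])
        markers.append((xs.mean(), ys.mean()))
    return markers
```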
For Lifelong SLAM, one has to deal with temporary localization failures, e.g., induced by kidnapping. We achieve this by starting a new map and merging it with the previous map as soon as relocalization succeeds. Since relocalization methods are fallible, it can happen that such a merge is invalid, e.g., due to perceptual aliasing. To address this issue, we propose methods to detect and undo invalid merges. These methods compare incoming scans with scans that were previously merged into the current map and consider how well they agree with each other. Evaluation of our methods takes place using a dataset that consists of multiple flat and office environments, as well as the public MIT Stata Center dataset. We show that methods based on a change detection algorithm and on comparison of gridmaps perform well in both environments and can be run in real-time with a reasonable computational cost.
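A hedged sketch of the gridmap-comparison idea, assuming the two occupancy grids are already aligned in a common frame at the same resolution and store occupancy probabilities (unknown cells near 0.5); the thresholds and decision rule are illustrative, not the paper's exact criterion.

```python
import numpy as np

def gridmap_agreement(map_a, map_b, occ_thresh=0.65, free_thresh=0.35):
    """Fraction of cells classified the same (occupied/free) in both maps,
    computed over the cells that are confidently observed in both."""
    known_a = (map_a > occ_thresh) | (map_a < free_thresh)
    known_b = (map_b > occ_thresh) | (map_b < free_thresh)
    both = known_a & known_b
    if both.sum() == 0:
        return 1.0                          # nothing to compare, do not veto the merge
    agree = (map_a[both] > occ_thresh) == (map_b[both] > occ_thresh)
    return agree.mean()

def merge_is_valid(map_before, map_after_merge, min_agreement=0.9):
    """Small decision rule: flag the merge as invalid (and undo it) if the maps
    disagree on too many commonly observed cells."""
    return gridmap_agreement(map_before, map_after_merge) >= min_agreement
```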
Humans intuitively solve tasks in versatile ways, varying their behavior in terms of trajectory-based planning and for individual steps. Thus, they can easily generalize and adapt to new and changing environments. Current Imitation Learning algorithms often only consider unimodal expert demonstrations and act in a state-action-based setting, making it difficult for them to imitate human behavior in case of versatile demonstrations. Instead, we combine a mixture of movement primitives with a distribution matching objective to learn versatile behaviors that match the expert's behavior and versatility. To facilitate generalization to novel task configurations, we do not directly match the agent's and expert's trajectory distributions but rather work with concise geometric descriptors which generalize well to unseen task configurations. We empirically validate our method on various robot tasks using versatile human demonstrations and compare to imitation learning algorithms in a state-action setting as well as a trajectory-based setting. We find that the geometric descriptors greatly help in generalizing to new task configurations and that combining them with our distribution-matching objective is crucial for representing and reproducing versatile behavior.
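A hedged sketch of the generative side of a mixture of movement primitives (1-D, ProMP-style weights over radial-basis features); the distribution-matching objective and the geometric descriptors from the abstract are not reproduced here, and the component fields are hypothetical.

```python
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized radial-basis features over a phase variable t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width))
    return phi / phi.sum(axis=1, keepdims=True)               # (T, n_basis)

def sample_trajectory(mixture, t, rng):
    """mixture: list of components, each a dict with mixing weight 'pi',
    mean weight vector 'mu' (n_basis,) and covariance 'cov' (n_basis, n_basis).
    Draw a component, draw primitive weights, map them to a 1-D trajectory."""
    pis = np.array([c["pi"] for c in mixture])
    k = rng.choice(len(mixture), p=pis / pis.sum())
    w = rng.multivariate_normal(mixture[k]["mu"], mixture[k]["cov"])
    return rbf_features(t, n_basis=len(w)) @ w                # positions over time, (T,)
```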
Variational inference with Gaussian mixture models (GMMs) makes it possible to learn highly tractable yet multimodal approximations of intractable target distributions. GMMs are particularly relevant for problem settings with up to a few hundred dimensions, for example in robotics, for modelling distributions over trajectories or joint configurations. This work focuses on two very effective GMM-based methods that both employ independent natural gradient updates for the individual components and for the categorical distribution over the weights. We show for the first time that, although their practical implementations and theoretical guarantees differ, their derived updates are equivalent. We identify several design choices that distinguish the two approaches, namely with respect to sample selection, natural gradient estimation, step-size adaptation, and whether trust regions are enforced or the number of components is adapted. We perform extensive ablations on these design choices and show that they strongly affect the efficiency of the optimization and the variability of the learned distribution. Based on our insights, we propose a novel instantiation of this generalized framework that combines first-order natural gradient estimates with trust regions and component adaptation, and that significantly outperforms both previous methods in all our experiments.
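The two methods compared above rely on natural-gradient updates, trust regions, and component adaptation; the toy sketch below only illustrates the shared skeleton: a GMM fitted to an unnormalised target with independent updates for the component parameters and the weight distribution, here using plain reparameterised gradients instead of natural gradients. The target density and all hyperparameters are illustrative assumptions.

```python
import torch

def log_target(x):
    """Hypothetical unnormalised target density (banana-shaped, 2-D)."""
    return -0.5 * (x[:, 0] ** 2 + 4.0 * (x[:, 1] - 0.25 * x[:, 0] ** 2) ** 2)

def gmm_log_prob(x, logits, means, log_stds):
    comps = torch.distributions.Normal(means, log_stds.exp())      # K diagonal Gaussians
    log_comp = comps.log_prob(x[:, None, :]).sum(-1)               # (N, K)
    return torch.logsumexp(log_comp + torch.log_softmax(logits, 0), dim=1)

def fit_gmm(n_components=5, dim=2, steps=2000, n_samples=32, lr=5e-3, seed=0):
    torch.manual_seed(seed)
    logits = torch.zeros(n_components, requires_grad=True)
    means = torch.randn(n_components, dim, requires_grad=True)
    log_stds = torch.zeros(n_components, dim, requires_grad=True)
    opt_comp = torch.optim.Adam([means, log_stds], lr=lr)          # independent updates for
    opt_weights = torch.optim.Adam([logits], lr=lr)                # components and weights
    for _ in range(steps):
        eps = torch.randn(n_components, n_samples, dim)
        x = means[:, None, :] + log_stds.exp()[:, None, :] * eps   # reparameterised samples
        flat = x.reshape(-1, dim)
        per_sample = log_target(flat) - gmm_log_prob(flat, logits, means, log_stds)
        per_comp = per_sample.reshape(n_components, n_samples).mean(dim=1)
        loss = -(torch.softmax(logits, 0) * per_comp).sum()        # negative ELBO estimate
        opt_comp.zero_grad(); opt_weights.zero_grad()
        loss.backward()
        opt_comp.step(); opt_weights.step()
    return logits.detach(), means.detach(), log_stds.detach()
```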